Mark Zuckerberg is chief executive officer of Facebook.
When you build services that connect billions of people across countries and cultures, you’re going to see all of the good that humanity can do, and you’re also going to see people try to abuse those services in every way possible. Our responsibility at Facebook is to amplify the good and mitigate the bad.
This is especially true when it comes to elections. Free and fair elections are the heart of every democracy. During the 2016 election, we were actively looking for traditional cyberattacks, and we found them. What we didn’t find until later were foreign actors running coordinated campaigns to interfere with America’s democratic process. Since then, we’ve focused on improving our defenses and making it much harder for anyone to interfere in elections.
Key to our efforts has been finding and removing fake accounts — the source of much of the abuse, including misinformation. Bad actors can use computers to generate these in bulk. But with advances in artificial intelligence, we now block millions of fake accounts every day as they are being created so they can’t be used to spread spam, false news or inauthentic ads.
Increased transparency in our advertising systems is another area where we have made progress. You can now see all the ads an advertiser is running, even if they aren’t targeted to you. Anyone who wants to run political or issue ads in the United States on Facebook must verify their identity. All political and issue ads must also make clear who paid for them, in the same way as TV or newspaper advertisements. But we’ve gone even further by putting all these ads in a public archive, which anyone can search to see how much was spent on each individual ad and the audience it reached. This greater transparency will increase responsibility and accountability for advertisers.
As we’ve seen from previous elections, misinformation is a real challenge. A big part of the solution is getting rid of fake accounts. But it’s also about attacking the spammers’ economic incentives to create false news in the first place. And where posts are flagged as potentially false, we pass them to independent fact-checkers — such as the Associated Press and the Weekly Standard — to review, and we demote posts rated as false, which means they lose 80 percent of future traffic.
We’re not working alone. After 2016, it became clear that everyone — governments, tech companies and independent experts — needs to do a better job of sharing the signals and information they have to prevent this kind of abuse. These bad actors don’t restrict themselves to one service, and we shouldn’t approach the problem in silos, either. That’s why we’re working more closely with other technology companies on the cybersecurity threats we all face, and we’ve worked with law enforcement to take down accounts in Russia.
One of the biggest changes we’ve made over the past year is not to wait for reports of suspicious activity. Instead, we look proactively for potentially harmful election-related content, such as pages registered to a foreign entity that post divisive content to sow mistrust and drive people apart. When we find them, our security team manually reviews the accounts to see whether they violate our policies. If they do, we quickly remove them. For example, we recently took down a network of accounts in Brazil that was hiding its identity and spreading misinformation ahead of the country’s presidential elections in October.
For the U.S. midterm elections, we’re also using a new tool we tested in last year’s Alabama Senate special election to identify political interference more quickly. This enabled us to find and remove foreign political spammers who’d previously flown under the radar. And last month, we took down hundreds of pages, groups and accounts for creating networks that were deliberately misleading people about their identities and intentions. Some originated in Iran and others in Russia.
I’m often asked how confident I feel about the midterms. We’ve made a lot of progress, as our work during the French, German, Mexican and Italian elections has shown. In each case, we identified and removed fake accounts and bad content leading up to the elections, and in Germany we worked directly with the government to share information about potential threats. The investments we continue to make in people and technology will help us improve even further. But companies such as Facebook face sophisticated, well-funded adversaries who are getting smarter over time, too. It’s an arms race, and it will take the combined forces of the U.S. private and public sectors to protect America’s democracy from outside interference.